#0: Fix failing test case for width sharded non-32 multiple output width #16224
base: main
Conversation
Force-pushed from 8db5c45 to 41070e7.
ttnn/cpp/ttnn/operations/conv/conv2d/prepare_conv2d_weights.cpp (outdated)
Should we add a regression UT for this?
It would be nice to remove this test case from the sweep.
Force-pushed from 41070e7 to 1007506.
@pavlejosipovic I've added a UT. It exposed some more bugs, which I have also fixed. I removed the test from the sweep.
Force-pushed from 83628b8 to 3f80dad.
@@ -191,6 +192,7 @@ Result conv2d(

    if (bypass_halo) {
        if (input_tensor_post_tm.layout() == Layout::TILE) {
            input_tensor_post_tm = ttnn::reshape(input_tensor_post_tm, input_tensor_post_tm.get_padded_shape());
Why do we need this reshape here?
to_layout was using the logical shape instead of the padded shape, which was causing an error.
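For context, here is a minimal standalone sketch of the logical-vs-padded shape mismatch described above. The 32x32 tile size matches tt-metal's tiles, but the shapes and numbers are illustrative assumptions, not values from this PR:

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Assumed for illustration: in TILE layout the last two dims of the stored
// buffer are rounded up to the 32x32 tile grid, so a tensor carries both a
// logical shape and a (possibly larger) padded shape.
constexpr uint32_t TILE_DIM = 32;

uint32_t round_up_to_tile(uint32_t x) {
    return (x + TILE_DIM - 1) / TILE_DIM * TILE_DIM;
}

int main() {
    // Hypothetical conv output: 1024 rows, out_channels = 100 (not 32-aligned).
    std::array<uint32_t, 2> logical{1024, 100};
    std::array<uint32_t, 2> padded{round_up_to_tile(logical[0]),   // 1024
                                   round_up_to_tile(logical[1])};  // 128

    // An untilize that walks the logical width (100) instead of the padded
    // width (128) mis-addresses the tiled buffer; reshaping to the padded
    // shape first, as in the diff above, sidesteps that.
    std::cout << "logical " << logical[0] << "x" << logical[1]
              << ", padded " << padded[0] << "x" << padded[1] << "\n";
}
```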
At least we should log an issue with unit tests and reference it here.
It would be better if we could just fix to_layout.
Is halo with untilize doing this properly?
Problem description
The following test case fails:
tests/sweep_framework/sweeps/conv2d/short/conv2d_short_sweep.py::test_conv2d_localrun_fail_only[device_params0-input_spec7]
What's changed
Fixed width-sharded weight preparation for the case where out_channels % 32 != 0.
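As a sketch of the arithmetic behind that fix (the core count, the helper, and the per-shard alignment rule below are assumptions for illustration, not code from prepare_conv2d_weights.cpp): with width sharding, each core's shard of the output channels must itself be tile-aligned, so a non-32-multiple out_channels picks up zero padding per shard rather than once for the whole tensor.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical numbers; the real shard-width logic lives in
// prepare_conv2d_weights.cpp and may differ in detail.
constexpr uint32_t TILE_WIDTH = 32;

uint32_t round_up(uint32_t x, uint32_t multiple) {
    return (x + multiple - 1) / multiple * multiple;
}

int main() {
    uint32_t out_channels = 100;  // 100 % 32 != 0 -- the failing case
    uint32_t num_cores = 3;       // width-sharded across 3 cores (assumed)

    // Split the channels across cores, then tile-align each core's shard.
    uint32_t per_core = round_up(out_channels, num_cores) / num_cores;  // 34
    uint32_t shard_width = round_up(per_core, TILE_WIDTH);              // 64
    uint32_t padded_out_channels = shard_width * num_cores;             // 192

    std::cout << "shard width " << shard_width << ", padded out_channels "
              << padded_out_channels << "\n";
}
```

Note that under these assumptions the per-shard result (192) differs from round_up(100, 32) = 128, which is exactly why the non-32-multiple output width is easy to get wrong when preparing the weights.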
Checklist